[Experimental] Add TurboQuantKVCache: PolarQuant KV cache compression at 2-4 bits #1059
Open
rachittshah wants to merge 1 commit into ml-explore:main from
Adds opt-in TurboQuant KV cache compression (PolarQuant, ICLR 2026) in a single self-contained file with zero impact on existing paths. turboquant.py uses only mlx.core (no numpy). Codebook constants are inlined (~200 bytes), and the rotation matrix is generated via mx.linalg.qr. The module is lazy-loaded only when --turbo-kv-bits is passed or to_turboquant() is called.

Changes to existing files:
- cache.py: lazy class resolver in load_prompt_cache (+10 lines), to_turboquant() on KVCache (+13 lines)
- generate.py: --turbo-kv-bits CLI arg (+14 lines)
- tests: 7 new tests, all 27 pass

Quality (Llama 3.2-3B, head_dim=128):
- 4-bit: 0.997 cosine vs FP16, 3.8x compression
- 3-bit: 0.988 cosine vs FP16, 4.6x compression
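The compression ratios above fall short of the raw 16/4 = 4x and 16/3 ≈ 5.3x because quantization metadata (per-group scales and the like) is stored alongside the packed codes. A back-of-envelope sketch of that overhead, assuming one FP16 scale shared by each group of 64 coordinates — the group size is a guess for illustration, the PR does not state its layout:

```python
# Back-of-envelope compression ratios for a scalar-quantized KV cache.
# Assumption (not from the PR): one FP16 scale per group of 64 coordinates.

def effective_ratio(bits_per_coord: float, group_size: int = 64,
                    scale_bits: int = 16) -> float:
    """FP16 baseline (16 bits/coord) vs quantized coords plus one
    shared scale amortized over each group of coordinates."""
    overhead = scale_bits / group_size      # amortized metadata bits/coord
    return 16.0 / (bits_per_coord + overhead)

print(f"4-bit: {effective_ratio(4):.2f}x")  # ~3.76x, near the reported 3.8x
print(f"3-bit: {effective_ratio(3):.2f}x")  # ~4.92x
```

Under this assumed layout the 4-bit figure lands close to the reported 3.8x; the exact numbers depend on the real metadata layout in turboquant.py.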
Closes #1060
Adds PolarQuant KV cache compression (from Google's TurboQuant paper, ICLR 2026) as an opt-in experimental feature. The algorithm applies a fixed random orthogonal rotation to KV vectors, then quantizes each coordinate with a precomputed Lloyd-Max optimal scalar quantizer. The scheme is data-oblivious: no calibration is needed.
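A minimal sketch of the two steps described above — fixed random orthogonal rotation via QR, then per-coordinate scalar quantization against a precomputed codebook. It is written in NumPy for portability; the PR's turboquant.py uses only mlx.core, and the 2-bit codebook here holds textbook Lloyd-Max levels for a unit Gaussian, which may differ from the constants inlined in the PR:

```python
import numpy as np

# Approximate Lloyd-Max levels for a unit Gaussian at 2 bits.
# Illustrative only; the PR's inlined 2/3/4-bit tables may differ.
LLOYD_MAX_2BIT = np.array([-1.510, -0.4528, 0.4528, 1.510])

def make_rotation(dim: int, seed: int = 0) -> np.ndarray:
    """Fixed random orthogonal rotation via QR (mirrors mx.linalg.qr)."""
    rng = np.random.default_rng(seed)
    q, _ = np.linalg.qr(rng.standard_normal((dim, dim)))
    return q

def quantize(x, rot, codebook=LLOYD_MAX_2BIT):
    """Rotate, rescale to roughly unit variance, snap each coordinate
    to the nearest codebook level."""
    y = x @ rot
    scale = y.std(axis=-1, keepdims=True) + 1e-8
    idx = np.abs((y / scale)[..., None] - codebook).argmin(axis=-1)
    return idx.astype(np.uint8), scale

def dequantize(idx, scale, rot, codebook=LLOYD_MAX_2BIT):
    """Look up levels, undo the scale, rotate back (rot is orthogonal)."""
    return (codebook[idx] * scale) @ rot.T
```

Because the rotation is data-independent and the codebook is precomputed, there is nothing to calibrate: the same constants work for any model.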
Everything lives in a single new file, mlx_lm/models/turboquant.py (~200 lines, pure mlx.core, no numpy). Nothing is imported unless --turbo-kv-bits is passed or to_turboquant() is called.

Changes
- models/turboquant.py: new file
- models/cache.py: lazy class resolver in load_prompt_cache (+10 lines), to_turboquant() on KVCache (+13 lines)
- generate.py: --turbo-kv-bits CLI arg, maybe_turboquant_kv_cache() (+14 lines)
- tests/test_prompt_cache.py: new tests

No new dependencies. Codebook constants are inlined. Rotation matrix generated via mx.linalg.qr.

Results
Tested on Apple Silicon across Llama 3.2-1B/3B and Qwen3-1.7B/4B, measuring logit cosine similarity against the FP16 KV cache. Top-1 token accuracy is perfect at 4-bit across all models and prompts. Full benchmark and methodology: https://github.com/rachittshah/mlx-turboquant/blob/main/REPORT.md
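For reference, logit cosine similarity is just the normalized dot product between the logits produced with the compressed cache and the FP16 baseline. A minimal sketch of the metric (the actual benchmark harness lives in the linked REPORT.md):

```python
import numpy as np

def logit_cosine(a, b) -> float:
    """Cosine similarity between two logit arrays, flattened.
    1.0 means the compressed cache reproduces the baseline exactly."""
    a = np.asarray(a, dtype=np.float64).ravel()
    b = np.asarray(b, dtype=np.float64).ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```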
Known limitations
- Decode throughput is ~0.5x FP16 due to the dequantize-on-fetch path. A fused Metal kernel (analogous to mx.quantized_matmul) would close this gap but is out of scope for this PR.
- Some models degrade below 4-bit (Qwen3-1.7B drops at 3-bit). Recommend 4-bit as default.
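To illustrate where the decode overhead comes from, here is a hedged NumPy sketch of a dequantize-on-fetch attention step: the packed codes are expanded back to floats before every matmul, which is exactly the work a fused kernel would fold into the multiply itself. The codebook, shapes, and scaling are illustrative, not the PR's layout:

```python
import numpy as np

CODEBOOK = np.array([-1.510, -0.4528, 0.4528, 1.510])  # illustrative 2-bit levels

def attention_weights(q, k_idx, k_scale):
    """One decode step against a quantized K cache.
    q: (1, d) query; k_idx: (seq, d) codes; k_scale: (seq, 1) scales."""
    k = CODEBOOK[k_idx] * k_scale              # extra pass over the whole cache
    scores = (q @ k.T) / np.sqrt(q.shape[-1])  # (1, seq)
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    return w / w.sum(axis=-1, keepdims=True)   # softmax over cached positions
```

The dequantization line touches every cached position on every step, which is consistent with the ~0.5x throughput noted above.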
Usage
python -m mlx_lm.generate --model mlx-community/Llama-3.2-3B-Instruct-4bit \
    --turbo-kv-bits 4 --prompt "The capital of France is"

Tests
All 27 pass (20 existing + 7 new). Tests cover basic ops, quality, trim, save/load, conversion, model integration, and memory.